Semantic-guided Reinforced Region Embedding for Generalized Zero-Shot Learning

Authors

Abstract

Generalized Zero-Shot Learning (GZSL) aims to recognize images from either the seen or the unseen domain, mainly by learning a joint embedding space that associates image features with the corresponding category descriptions. Recent methods have proved that localizing important object regions can effectively bridge the semantic-visual gap. However, these are all based on one-off visual localizers, lacking interpretability and flexibility. In this paper, we propose a novel Semantic-guided Reinforced Region Embedding (SR2E) network that localizes important objects in the long-term interest to construct the embedding space. SR2E consists of a Reinforced Region Module (R2M) and a Semantic Alignment Module (SAM). First, without annotated bounding boxes as supervision, R2M encodes the semantic guidance into reward and punishment criteria that teach the localizer serialized region searching. Besides, R2M explores different action spaces along the searching path to avoid locally optimal localization, and thereby generates discriminative region features with less redundancy. Second, SAM preserves the semantic relationship via semantic-visual alignment and designs a domain detector to alleviate domain confusion. Experiments on four public benchmarks demonstrate that the proposed SR2E is an effective GZSL method with a reinforced embedding space, obtaining an averaged 6.1% improvement.
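
The abstract names R2M's reward-and-punishment criterion and SAM's semantic-visual alignment without spelling them out. As a rough, hedged illustration of the general idea rather than the authors' implementation, the sketch below scores a localized region against per-class semantic vectors through a learned visual-to-semantic projection and rewards the localizer only when the ground-truth class is the most compatible one; every name and value here (semantic_reward, W, class_semantics, the +1/-1 reward) is an assumption made for illustration.

```python
import numpy as np

def semantic_reward(region_feat, class_semantics, true_class, W):
    """Hedged sketch of a semantic-guided reward, not the paper's exact criterion.

    region_feat:     (d_v,) visual feature of the currently localized region
    class_semantics: (C, d_s) per-class attribute/description vectors
    true_class:      index of the ground-truth category
    W:               (d_v, d_s) learned visual-to-semantic projection
    """
    projected = region_feat @ W            # map the region into the semantic space
    scores = class_semantics @ projected   # compatibility score for every class
    # Reward the localizer when the region is most compatible with the true class,
    # punish it otherwise (a simple +1/-1 rule standing in for R2M's criterion).
    return 1.0 if int(np.argmax(scores)) == true_class else -1.0

# Toy usage with random data.
rng = np.random.default_rng(0)
d_v, d_s, C = 512, 85, 10
reward = semantic_reward(rng.normal(size=d_v),
                         rng.normal(size=(C, d_s)),
                         true_class=3,
                         W=rng.normal(size=(d_v, d_s)))
print(reward)
```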

Similar Resources

Transductive Unbiased Embedding for Zero-Shot Learning

Most existing Zero-Shot Learning (ZSL) methods have the strong bias problem, in which instances of unseen (target) classes tend to be categorized as one of the seen (source) classes. So they yield poor performance after being deployed in the generalized ZSL settings. In this paper, we propose a straightforward yet effective method named Quasi-Fully Supervised Learning (QFSL) to alleviate the bi...

Full text

Zero-Shot Learning for Semantic Utterance Classification

We propose a novel zero-shot learning method for semantic utterance classification (SUC). It learns a classifier f : X → Y for problems where none of the semantic categories Y are present in the training set. The framework uncovers the link between categories and utterances through a semantic space. We show that this semantic space can be learned by deep neural networks trained on large amounts...
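
As a minimal, hedged sketch of this shared-semantic-space idea (assuming cosine similarity as the compatibility measure, a choice of this sketch rather than the paper), an input embedded into the semantic space is labeled with the class whose semantic vector lies closest, even for classes absent from training; the names below (zero_shot_predict, the toy class vectors) are hypothetical.

```python
import numpy as np

def zero_shot_predict(x_embedding, class_semantic_vectors):
    """Pick the class whose semantic vector is most cosine-similar to the input.

    x_embedding:            (d,) input (e.g., an utterance) mapped into the semantic space
    class_semantic_vectors: dict {label: (d,) semantic vector}; may contain classes
                            that never appeared in the training set
    """
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(class_semantic_vectors, key=lambda y: cos(x_embedding, class_semantic_vectors[y]))

# Toy usage: "weather" is treated as an unseen class, yet it can still be predicted
# because it owns a semantic vector in the shared space.
rng = np.random.default_rng(1)
classes = {"flights": rng.normal(size=16),
           "hotels": rng.normal(size=16),
           "weather": rng.normal(size=16)}
query = classes["weather"] + 0.1 * rng.normal(size=16)  # an input embedded near "weather"
print(zero_shot_predict(query, classes))
```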

Full text

Preserving Semantic Relations for Zero-Shot Learning

Zero-shot learning has gained popularity due to its potential to scale recognition models without requiring additional training data. This is usually achieved by associating categories with their semantic information like attributes. However, we believe that the potential offered by this paradigm is not yet fully exploited. In this work, we propose to utilize the structure of the space spanned ...

Full text

Attribute Embedding with Visual-Semantic Ambiguity Removal for Zero-shot Learning

Conventional zero-shot learning (ZSL) methods recognise an unseen instance by projecting its visual features to a semantic space that is shared by both seen and unseen categories. However, we observe that such a one-way paradigm suffers from the visual-semantic ambiguity problem. Namely, the semantic concepts (e.g. attributes) cannot explicitly correspond to visual patterns, and vice versa. Such...

Full text

Semantic Graph for Zero-Shot Learning

Zero-shot learning aims to classify visual objects without any training data via knowledge transfer between seen and unseen classes. This is typically achieved by exploring a semantic embedding space where the seen and unseen classes can be related. Previous works differ in what embedding space is used and how different classes and a test image can be related. In this paper, we utilize the anno...

Full text

Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i2.16230